
    First steps towards an imprecise Poisson process

    The Poisson process is the most elementary continuous-time stochastic process that models a stream of repeating events. It is uniquely characterised by a single parameter called the rate. Instead of a single value for this rate, we here consider a rate interval and let it characterise two nested sets of stochastic processes. We call these two sets of stochastic processes imprecise Poisson processes, explain why this is justified, and study the corresponding lower and upper (conditional) expectations. Besides a general theoretical framework, we also provide practical methods to compute lower and upper (conditional) expectations of functions that depend on the number of events at a single point in time.
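    A minimal sketch of such a computation, assuming the lower expectation of f(N_T) can be approximated by backward Euler steps driven by the lower rate operator, which at each count n picks the rate in [lam_lo, lam_hi] that minimises rate * (f(n+1) - f(n)). The function names, truncation level and step count are illustrative choices, not the paper's:

```python
import numpy as np

def lower_expectation(f, T, lam_lo, lam_hi, n_max=200, n_steps=20000):
    """Approximate lower expectation of f(N_T) for an imprecise Poisson
    process with rate interval [lam_lo, lam_hi], via backward Euler steps
    with the lower rate operator. The state space is truncated at n_max
    events, and the step size must satisfy dt * lam_hi <= 1 (both hold
    for the defaults with moderate T). A sketch, not the paper's method."""
    dt = T / n_steps
    g = np.array([f(n) for n in range(n_max + 1)], dtype=float)
    for _ in range(n_steps):
        diff = np.append(g[1:] - g[:-1], 0.0)         # f(n+1) - f(n); boundary held fixed
        rate = np.where(diff >= 0.0, lam_lo, lam_hi)  # minimises rate * diff elementwise
        g = g + dt * rate * diff
    return g[0]  # lower expectation given zero events at time 0
```

    By conjugacy, the upper expectation of f(N_T) is minus the lower expectation of -f; for f(n) = n this recovers the intuitive bounds lam_lo * T and lam_hi * T.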

    Imprecise continuous-time Markov chains : efficient computational methods with guaranteed error bounds

    Imprecise continuous-time Markov chains are a robust type of continuous-time Markov chains that allow for partially specified time-dependent parameters. Computing inferences for them requires the solution of a non-linear differential equation. As there is no general analytical expression for this solution, efficient numerical approximation methods are essential to the applicability of this model. We here improve the uniform approximation method of Krak et al. (2016) in two ways and propose a novel and more efficient adaptive approximation method. For ergodic chains, we also provide a method that allows us to approximate stationary distributions up to any desired maximal error.
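    The uniform approximation idea can be sketched as follows: apply backward Euler steps with the lower transition rate operator, and refine the grid until two successive approximations agree. The grid-doubling stopping rule below is an illustrative stand-in and does not provide the guaranteed a priori error bounds of the methods in the paper; function names and the tolerance are assumptions:

```python
import numpy as np

def lower_expectation_ictmc(Q_lower, g, T, tol=1e-4):
    """Backward Euler approximation of the lower expectation of g(X_T).
    Q_lower maps a vector h to the lower transition rate operator applied
    to h. The number of grid points is doubled until two successive
    approximations agree to within tol -- a naive sketch, not the paper's
    adaptive method with guaranteed error bounds."""
    def run(n):
        dt = T / n
        h = np.asarray(g, dtype=float).copy()
        for _ in range(n):
            h = h + dt * Q_lower(h)  # requires dt * max |Q_ii| <= 1
        return h
    n = 16
    prev = run(n)
    while True:
        n *= 2
        cur = run(n)
        if np.max(np.abs(cur - prev)) < tol:
            return cur
        prev = cur
```

    For a precise chain, Q_lower is just multiplication by the rate matrix, and the iteration reduces to ordinary Euler integration of the backward equation.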

    Bounding inferences for large-scale continuous-time Markov chains : a new approach based on lumping and imprecise Markov chains

    If the state space of a homogeneous continuous-time Markov chain is too large, making inferences becomes computationally infeasible. Fortunately, the state space of such a chain is usually too detailed for the inferences we are interested in, in the sense that a less detailed (and hence smaller) state space suffices to unambiguously formalise the inference. However, in general this so-called lumped state space precludes computing exact inferences, because the corresponding dynamics are unknown and/or intractable to obtain. We address this issue by considering an imprecise continuous-time Markov chain. In this way, we are able to provide guaranteed lower and upper bounds for the inferences of interest, without suffering from the curse of dimensionality.
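    The outer-approximation idea can be made concrete: given the rate matrix of the original chain and an assignment of states to lumps, interval-valued transition rates for the lumped chain are obtained by minimising and maximising the aggregated rates over the states within each lump. A minimal sketch, with names and the rate-matrix convention (rows sum to zero) assumed:

```python
import numpy as np

def lump_rate_bounds(Q, lumps):
    """Interval-valued transition rates for a lumped chain.
    Q: (n, n) rate matrix with rows summing to zero; lumps: length-n array
    of lump labels. For lumps A != B, returns the min and max over states
    x in A of the total rate from x into lump B -- a guaranteed outer
    approximation of the (generally unknown) lumped dynamics."""
    labels = np.unique(lumps)
    k = len(labels)
    lo = np.zeros((k, k))
    up = np.zeros((k, k))
    for i, A in enumerate(labels):
        rows = Q[lumps == A]                      # rates out of states in lump A
        for j, B in enumerate(labels):
            if i == j:
                continue                          # diagonals follow from row sums
            into_B = rows[:, lumps == B].sum(axis=1)
            lo[i, j] = into_B.min()
            up[i, j] = into_B.max()
    return lo, up
```

    Whenever lo and up coincide for all lump pairs, the chain is lumpable in the classical sense and the reduced chain is precise; otherwise the intervals quantify exactly how much imprecision lumping introduces.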

    Imprecise Markov Models for Scalable and Robust Performance Evaluation of Flexi-Grid Spectrum Allocation Policies

    The possibility of flexibly assigning spectrum resources with channels of different sizes greatly improves the spectral efficiency of optical networks, but can also lead to unwanted spectrum fragmentation. We study this problem in a scenario where traffic demands are categorised in two types (low or high bit-rate) by assessing the performance of three allocation policies. Our first contribution consists of exact Markov chain models for these allocation policies, which allow us to numerically compute the relevant performance measures. However, these exact models do not scale to large systems, in the sense that the computations required to determine the blocking probabilities, which measure the performance of the allocation policies, become intractable. In order to address this, we first extend an approximate reduced-state Markov chain model that is available in the literature to the three considered allocation policies. These reduced-state Markov chain models allow us to tractably compute approximations of the blocking probabilities, but the accuracy of these approximations cannot be easily verified. Our main contribution then is the introduction of reduced-state imprecise Markov chain models that allow us to derive guaranteed lower and upper bounds on blocking probabilities, for the three allocation policies separately or for all possible allocation policies simultaneously. Comment: 16 pages, 7 figures, 3 tables.
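    As a much-simplified analogue of computing blocking probabilities from a Markov model: for a single traffic class on c identical channels with Poisson arrivals and offered load a, the classical Erlang-B recursion gives the blocking probability in closed form. The paper's exact and reduced-state models are considerably richer (two request sizes, contiguity constraints), so this sketch is only meant to fix ideas:

```python
def erlang_b(c, a):
    """Blocking probability of the M/M/c/c loss system (Erlang-B formula),
    computed with the standard numerically stable recursion
    B(k, a) = a * B(k-1, a) / (k + a * B(k-1, a)), B(0, a) = 1."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b
```

    In the flexi-grid setting, blocking additionally depends on whether enough contiguous free slices exist, which is precisely what makes the exact state space blow up and motivates the imprecise reduced-state models.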

    Modelling Spectrum Assignment in a Two-Service Flexi-Grid Optical Link with Imprecise Continuous-Time Markov Chains

    Flexi-grid optical networks (Gerstel et al., 2012) are a novel paradigm for managing the capacity of optical fibers more efficiently. The idea is to divide the spectrum into small frequency slices, and to consider an allocation policy that adaptively assigns one or multiple contiguous slices to incoming bandwidth requests, depending on their size. However, as new requests arrive and old requests are served and return resources to the free pool, the spectrum might become fragmented, leading to inefficiency and unfairness. It is therefore necessary to quantify the performance of a given spectrum allocation policy, for example by determining the probability that a bandwidth request is blocked, in the sense that it cannot be allocated because there are not enough contiguous free slices. To determine blocking probabilities for an optical link with traffic requests of two different sizes and a random allocation policy, Kim et al. (2015) use a Markov chain. Unfortunately, the number of states of this Markov chain grows exponentially with the number of available frequency slices, making it infeasible to determine blocking probabilities for large systems. Therefore, Kim et al. (2015) also consider a second Markov chain, with a highly reduced state space and approximate transition rates, to obtain approximations of these blocking probabilities. In this contribution, we first show how to construct such full and reduced-state Markov chains for two other allocation policies, and compare these with the random policy. Next, we introduce a so-called imprecise Markov chain, which has the same reduced state space but imprecise (interval-valued) transition rates, and explain how it can be used to determine guaranteed upper and lower bounds on, rather than approximations of, blocking probabilities for different families of allocation policies.
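    For a small precise Markov chain, blocking probabilities follow from the stationary distribution of its rate matrix: solve pi Q = 0 with pi summing to one, then sum pi over the blocking states. A generic sketch of that first step (not the construction used in the contribution, where the state space is too large for this to be feasible):

```python
import numpy as np

def stationary(Q):
    """Stationary distribution of a continuous-time Markov chain with
    rate matrix Q (rows summing to zero): solves pi Q = 0, sum(pi) = 1
    as an overdetermined linear system via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # pi Q = 0 rows plus normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

    The exponential growth of the state space is exactly why this direct route fails for large links, and why the reduced-state imprecise chain is introduced instead.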

    Handling the state space explosion of Markov chains : how lumping introduces imprecision (almost) inevitably

    Markov chains (MCs) are used ubiquitously to model dynamical systems with uncertain dynamics. In many cases, the number of states that is required to accurately describe the dynamics of such a system grows exponentially with respect to the dimensions of the system, a well-known phenomenon called state space explosion. This limits the applicability of MC models to systems with relatively small dimensions. One way to reduce the number of states of an MC is to lump together states, for instance because they correspond to the same higher-order description. This lumping yields a reduced stochastic process which, at least for a given initial distribution, is an inhomogeneous MC. However, in general, determining its (time-dependent) dynamics, and hence also the temporal evolution of the probability distribution over the lumps, is impossible without first determining the temporal evolution of the distribution of the states of the original MC. Therefore, so far, this approach has been limited to the special case where the reduced MC is homogeneous, because its constant dynamics are then easily determined (Tian and Kannan, 2006). We here extend this approach by showing that, in general, the unknown dynamics of the reduced MC can be characterised by a so-called imprecise MC, in the sense that it provides a guaranteed outer approximation of the reduced MC. Using this imprecise MC, it becomes possible to draw approximate inferences about the original MC without suffering from the state space explosion problem. We focus here on inferences about its steady-state probability distribution. First, we study how the ergodic properties of the original MC translate to the reduced imprecise MC, and then use the methods outlined by Erreygers and De Bock (2017) to provide bounds on the expectation operator that corresponds to the steady-state distribution of the former. Second, we also propose an alternative method to determine (possibly different) bounds on those same expectations, the strength of which is the subject of ongoing research.
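    The steady-state bounding step can be sketched for an ergodic chain: repeatedly applying a discretised lower transition operator drives any function of the state to a constant vector, whose value lower-bounds the steady-state expectation. A minimal sketch of that iteration, with the step size and tolerances chosen purely for illustration:

```python
import numpy as np

def steady_state_lower(Q_lower, g, dt=0.01, tol=1e-8, max_iter=10**6):
    """Iterate h <- h + dt * Q_lower(h); for an ergodic (imprecise) chain
    h converges to a near-constant vector whose minimum lower-bounds the
    steady-state expectation of g. A sketch of the idea used by Erreygers
    and De Bock (2017); dt must satisfy dt * max |Q_ii| <= 1."""
    h = np.asarray(g, dtype=float)
    for _ in range(max_iter):
        h = h + dt * Q_lower(h)
        if h.max() - h.min() < tol:   # ergodicity: the range shrinks to zero
            return h.min()
    raise RuntimeError("iteration did not converge within max_iter steps")
```

    For a precise chain, Q_lower is multiplication by the rate matrix, and the returned value coincides (up to tol) with the exact steady-state expectation.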

    Optimal control of linear systems with quadratic cost and imprecise forward irrelevant input noise


    LQ optimal control for partially specified input noise

    We consider the problem of controlling a discrete-time scalar linear system that is subject to stochastic input noise, using a state-feedback control policy whose performance is measured by means of a quadratic cost. If the stochastic model for the noise is specified exactly, a control policy is said to be optimal if it minimises the expected value of the cost. For independent noise, such an optimal control policy is well known to be unique, and consists of a combination of state feedback and noise feedforward. Our first contribution consists in showing that this result remains true for dependent noise. However, in this generalised case, computing the optimal control policy is intractable. Next, our main contribution consists in additionally dropping the assumption that the stochastic model for the noise is specified exactly. Basically, we impose local bounds on the expectation of the noise, and consider the set of all (possibly dependent) stochastic models that are compatible with these bounds. In this context, we call a control policy optimal if it minimises the expected value of the cost for at least one of these compatible noise models. We show that any such optimal control policy consists of the same state feedback term and a possibly different noise feedforward term, and we derive backward recursive expressions that provide tight bounds on these noise feedforward terms. These bounds are easy to compute, and the recursive expressions turn out to be very familiar.
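    The shared state-feedback term is, for a precise scalar system, the familiar finite-horizon LQR gain obtained by a backward Riccati recursion. A sketch under assumed notation (dynamics x_{k+1} = a x_k + b u_k + w_k, stage cost q x^2 + r u^2, and the terminal cost weight taken equal to q purely for illustration):

```python
def lqr_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for a scalar discrete-time system:
    returns the finite-horizon state-feedback gains K_0, ..., K_{H-1},
    with S initialised to the (assumed) terminal weight q."""
    s = q          # terminal cost weight, assumed equal to q here
    gains = []
    for _ in range(horizon):
        k = a * b * s / (r + b * b * s)   # optimal gain at this stage
        gains.append(k)
        s = q + a * a * s - k * a * b * s  # Riccati update: q + a^2 r s / (r + b^2 s)
    return gains[::-1]                     # reorder from stage 0 to H-1
```

    The control at stage k is then u_k = -K_k x_k plus a noise feedforward term, which is the part that changes across the compatible noise models and for which the paper derives tight recursive bounds.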